# CleverQwen2-1.5B
This is a merge of pre-trained language models created using [mergekit](https://github.com/arcee-ai/mergekit).
The merge has grown by about 300M parameters and I don't know why; I would like to know, though. It works as expected - amazing - I just can't see any reason for Qwen2 models to gain parameters when merged.
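A quick way to check the count yourself is to load the model with transformers and sum the parameter tensors. This is just a sanity-check snippet, and the guess in the comment is speculation, not something verified against this checkpoint:

```python
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("trollek/CleverQwen2-1.5B")
total = sum(p.numel() for p in model.parameters())
print(f"{total / 1e9:.2f}B parameters")

# Running the same check on the base model shows where the extra weights
# live. One unverified guess: if the merge untied Qwen2's shared input/output
# embeddings, a separate lm_head alone would add 151936 * 1536 ≈ 233M params.
```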
## Merge Details
### Merge Method
This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method, with trollek/Qwen2-1.5B-Instruct-Abliterated as the base.
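For reference, the core idea of Model Stock is geometric: average the fine-tuned checkpoints, then pull the result back toward the base model by a ratio computed from the angle between the fine-tuned "task vectors". The sketch below is a rough per-tensor illustration of that idea, not mergekit's actual implementation; `base` and `finetuned` are assumed to be plain state dicts mapping parameter names to tensors.

```python
import torch

def model_stock(base: dict, finetuned: list[dict]) -> dict:
    """Per-tensor Model Stock sketch: average the deltas from the base,
    then interpolate toward the base with a ratio t derived from the
    average angle between those deltas."""
    n = len(finetuned)
    merged = {}
    for name, w0 in base.items():
        deltas = [ft[name].float() - w0.float() for ft in finetuned]
        flat = [d.flatten() for d in deltas]
        # mean pairwise cosine similarity between the task vectors
        cos = torch.stack([
            torch.nn.functional.cosine_similarity(flat[i], flat[j], dim=0)
            for i in range(n) for j in range(i + 1, n)
        ]).mean()
        # interpolation ratio from the Model Stock paper: t = N·cosθ / (1 + (N-1)·cosθ)
        t = n * cos / (1 + (n - 1) * cos)
        merged[name] = w0.float() + t * torch.stack(deltas).mean(dim=0)
    return merged
```

In practice mergekit also handles tokenizer and embedding alignment and other bookkeeping that this sketch ignores.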
### Models Merged
The following models were included in the merge:
- cognitivecomputations/dolphin-2.9.3-qwen2-1.5b
- M4-ai/Hercules-5.0-Qwen2-1.5B
- Replete-AI/Replete-Coder-Qwen2-1.5b
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
  - model: Replete-AI/Replete-Coder-Qwen2-1.5b
  - model: M4-ai/Hercules-5.0-Qwen2-1.5B
  - model: cognitivecomputations/dolphin-2.9.3-qwen2-1.5b
merge_method: model_stock
base_model: trollek/Qwen2-1.5B-Instruct-Abliterated
architecture: qwen2
dtype: bfloat16
```
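Assuming a standard mergekit install, saving the block above as `config.yaml` and running `mergekit-yaml config.yaml ./CleverQwen2-1.5B` should reproduce the merge (flags and defaults may differ between mergekit versions).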
## Quants
### Ollama
```sh
ollama pull trollek/cleverqwen2:1.5b-q4_k_s
ollama pull trollek/cleverqwen2:1.5b-q5_k_s
ollama pull trollek/cleverqwen2:1.5b-q6_k
```
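Once pulled, any of the tags runs directly, e.g. `ollama run trollek/cleverqwen2:1.5b-q6_k`.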