---
license: apache-2.0
library_name: transformers
base_model:
- Qwen/Qwen2.5-14B-Instruct
---
[![QuantFactory Banner](https://lh7-rt.googleusercontent.com/docsz/AD_4nXeiuCm7c8lEwEJuRey9kiVZsRn2W-b4pWlu3-X534V3YmVuVc2ZL-NXg2RkzSOOS2JXGHutDuyyNAUtdJI65jGTo8jT9Y99tMi4H4MqL44Uc5QKG77B0d6-JfIkZHFaUA71-RtjyYZWVIhqsNZcx8-OMaA?key=xt3VSDoCbmTY7o-cwwOFwQ)](https://hf.co/QuantFactory)

# QuantFactory/Rombos-LLM-V2.6-Qwen-14b-GGUF
This is a quantized version of [rombodawg/Rombos-LLM-V2.6-Qwen-14b](https://huggingface.co/rombodawg/Rombos-LLM-V2.6-Qwen-14b), created using llama.cpp.
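If you want to try one of the GGUF files locally, here is a minimal sketch using llama-cpp-python. The exact `.gguf` filename is an assumption (check the repository's file listing for the quant level you want), and context size and sampling settings are illustrative only.

```python
# Minimal sketch: download a GGUF quant and run it with llama-cpp-python.
# Assumptions: the filename below matches a file actually present in the repo;
# adjust repo_id/filename to the quant level you want to use.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

model_path = hf_hub_download(
    repo_id="QuantFactory/Rombos-LLM-V2.6-Qwen-14b-GGUF",
    filename="Rombos-LLM-V2.6-Qwen-14b.Q4_K_M.gguf",  # assumed filename
)

# Load the model; n_ctx is an example context window, not a recommendation.
llm = Llama(model_path=model_path, n_ctx=4096)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Give me a one-sentence summary of Qwen2.5."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```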
# Original Model Card
# Rombos-LLM-V2.6-Qwen-14b
![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/642cc1c253e76b4c2286c58e/LbnAeRIHQhRH_dVxfcHOw.jpeg)
Rombos-LLM-V2.6-Qwen-14b is the upgraded version of "rombodawg/Rombos-LLM-V2.5-Qwen-14b". The magic I performed to make this model better than it already was is only known to the deepest state, dankest memers, and God himself, so don't ask. But it does perform a decent bit better than version 2.5 in my hand testing. Benchmarks will come later.

Check out the Continuous Finetuning method that I apply to all my models below:
- https://docs.google.com/document/d/1OjbjU5AOz4Ftn9xHQrX3oFQGhQ6RDUuXQipnQ9gn6tU/edit?usp=sharing

Quants:
- https://huggingface.co/rombodawg/Rombos-LLM-V2.6-Qwen-14b-Q8_0-GGUF
- https://huggingface.co/rombodawg/Rombos-LLM-V2.6-Qwen-14b-Q5_K_M-GGUF

Benchmarks: (Coming soon)