Performance Models (Old and MoE merge)
I don't think the GGUF version works yet; I haven't tested it. But I expect this to be a very good model.
Inixion-v2 is my best MoErge model so far, capable of both RP and general tasks, though some things are still not good.
https://huggingface.co/Alsebay/Inixion-2x8B-v2-GGUF
My own GGUF quants; please use the fixed version.
You can find other GGUF versions from my friend https://huggingface.co/mradermacher
Here is the imatrix GGUF version: https://huggingface.co/mradermacher/Inixion-2x8B-v2-i1-GGUF
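
If the fixed GGUF works for you, a minimal sketch using llama-cpp-python and huggingface_hub is below. The quant filename is a placeholder I made up; check the repo's file list for the actual name.

```python
# Minimal sketch: download one quant from the GGUF repo and run a single chat turn.
# The filename below is a placeholder; pick a real one from the repo's file list.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

model_path = hf_hub_download(
    repo_id="Alsebay/Inixion-2x8B-v2-GGUF",
    filename="Inixion-2x8B-v2.Q4_K_M.gguf",  # placeholder filename
)

llm = Llama(
    model_path=model_path,
    n_ctx=8192,        # Llama 3 context length
    n_gpu_layers=-1,   # offload all layers to GPU if one is available
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Introduce yourself in one sentence."}]
)
print(out["choices"][0]["message"]["content"])
```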
Merge recipe:
base_model: MaziyarPanahi/Llama-3-8B-Instruct-v0.9
source_model: NeverSleep/Llama-3-Lumimaid-8B-v0.1
source_model: Sao10K/L3-8B-Stheno-v3.2
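
For reference, here is a hedged reconstruction of how the recipe above could be written out as a mergekit-moe style config. The `gate_mode`, `dtype`, and `positive_prompts` values are my own placeholders, not the actual settings used for this merge.

```python
# Hedged sketch: write the merge recipe above as a mergekit-moe style config file.
# gate_mode, dtype, and positive_prompts are assumed placeholders, not the real recipe.
import yaml

config = {
    "base_model": "MaziyarPanahi/Llama-3-8B-Instruct-v0.9",
    "gate_mode": "hidden",   # assumption: hidden-state gating
    "dtype": "bfloat16",     # assumption
    "experts": [
        {
            "source_model": "NeverSleep/Llama-3-Lumimaid-8B-v0.1",
            "positive_prompts": ["roleplay", "creative writing"],  # placeholder prompts
        },
        {
            "source_model": "Sao10K/L3-8B-Stheno-v3.2",
            "positive_prompts": ["general tasks", "instruction following"],  # placeholder prompts
        },
    ],
}

with open("inixion-2x8b-v2.yml", "w") as f:
    yaml.safe_dump(config, f, sort_keys=False)
```

mergekit's `mergekit-moe` command would then take a config file like this plus an output directory to build the 2x8B model.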
All of these are very smart models, so you can enjoy it.