---
base_model: NeuralNovel/Mini-Mixtral-v0.2
inference: false
license: apache-2.0
merged_models:
- unsloth/mistral-7b-v0.2
- mistralai/Mistral-7B-Instruct-v0.2
pipeline_tag: text-generation
quantized_by: Suparious
tags:
- moe
- frankenmoe
- merge
- mergekit
- lazymergekit
- unsloth/mistral-7b-v0.2
- mistralai/Mistral-7B-Instruct-v0.2
- quantized
- 4-bit
- AWQ
- text-generation
- autotrain_compatible
- endpoints_compatible
- chatml
---
# NeuralNovel/Mini-Mixtral-v0.2 AWQ

- Model creator: [NeuralNovel](https://huggingface.co/NeuralNovel)
- Original model: [Mini-Mixtral-v0.2](https://huggingface.co/NeuralNovel/Mini-Mixtral-v0.2)

![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/645cfe4603fc86c46b3e46d1/DOoAs2yzNOUC465BSM9-s.jpeg)
## Model Summary

Mini-Mixtral-v0.2 is a Mixture of Experts (MoE) model built from the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):

* [unsloth/mistral-7b-v0.2](https://huggingface.co/unsloth/mistral-7b-v0.2)
* [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2)
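For context, a frankenMoE merge of this kind is typically driven by a mergekit `mixtral` config along the following lines. This is a sketch, not the config actually used for this model; in particular, the `positive_prompts` shown here are illustrative assumptions that steer the router toward each expert.

```yaml
# Hypothetical LazyMergekit/mergekit MoE config (illustrative only)
base_model: unsloth/mistral-7b-v0.2
gate_mode: hidden        # route tokens by hidden-state similarity to the prompts below
dtype: bfloat16
experts:
  - source_model: unsloth/mistral-7b-v0.2
    positive_prompts:
      - "continue the following text"      # hypothetical routing prompt
  - source_model: mistralai/Mistral-7B-Instruct-v0.2
    positive_prompts:
      - "follow the user's instructions"   # hypothetical routing prompt
```

With a config like this, mergekit copies the shared layers from the base model and builds a router that dispatches each token to one of the two 7B experts, which is what produces the "Mini-Mixtral" architecture.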