---
license: apache-2.0
tags:
- mistral
- conversational
- text-generation-inference
base_model: BeaverAI/mistral-doryV2-12b
library_name: transformers
---
## Sampling
Mistral-Nemo-12B is very sensitive to the temperature sampler; start with values near 0.3, or you may get strange outputs. MistralAI notes this in the Transformers section of their model card.

Flash Attention also appears to cause odd behavior with this model, though this is unconfirmed.
## Original Model
BeaverAI/mistral-doryV2-12b
## How to Use
llama.cpp
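A minimal sketch of running one of the quants below with llama.cpp's `llama-cli`, folding in the temperature advice from the Sampling section. The binary and model paths are assumptions; adjust them to your build and download location.

```shell
# Run a GGUF quant with llama.cpp (paths are assumptions).
# --temp 0.3 follows the sampling guidance above.
./llama-cli \
  -m ./mistral-doryV2-12b-Q4_K_M.gguf \
  --temp 0.3 \
  -p "Write a short greeting." \
  -n 128
```

Higher temperatures are worth experimenting with, but per the note above, values far from 0.3 may degrade output quality.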
## License
Apache 2.0
## Quants
| Name | Quant Type | Size |
|---|---|---|
| mistral-doryV2-12b-Q2_K.gguf | Q2_K | 4.79 GB |
| mistral-doryV2-12b-Q3_K_M.gguf | Q3_K_M | 6.08 GB |
| mistral-doryV2-12b-Q4_K_M.gguf | Q4_K_M | 7.48 GB |
| mistral-doryV2-12b-Q5_K_M.gguf | Q5_K_M | 8.73 GB |
| mistral-doryV2-12b-Q6_K.gguf | Q6_K | 10.1 GB |
| mistral-doryV2-12b-Q8_0.gguf | Q8_0 | 13 GB |
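To fetch a single quant from the table without cloning the whole repository, `huggingface-cli download` works; a hedged sketch, where `<repo-id>` is a placeholder for this repository's actual id:

```shell
# Download one quant file to ./models (<repo-id> is a placeholder).
huggingface-cli download <repo-id> mistral-doryV2-12b-Q4_K_M.gguf \
  --local-dir ./models
```

Q4_K_M is a common starting point, balancing size (7.48 GB) against quality; drop to Q3_K_M or Q2_K only if memory is tight.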