
Quantizations of https://huggingface.co/mistralai/Ministral-8B-Instruct-2410
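
To fetch one of the quantized files programmatically, here is a minimal sketch using `huggingface_hub`. Both the repo id and the filename below are placeholders; substitute this repository's actual id and one of the quantization files listed further down.

```python
from huggingface_hub import hf_hub_download

# Placeholder repo id and filename; substitute this repository's id
# and the quantization file you want to download.
path = hf_hub_download(
    repo_id="<this-repo>/Ministral-8B-Instruct-2410-GGUF",
    filename="Ministral-8B-Instruct-2410-Q4_K_M.gguf",
)
print(path)  # local path to the downloaded GGUF file
```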

## Inference Clients/UIs


## From original readme

We introduce two new state-of-the-art models for local intelligence, on-device computing, and at-the-edge use cases. We call them les Ministraux: Ministral 3B and Ministral 8B.

The Ministral-8B-Instruct-2410 Language Model is an instruct fine-tuned model significantly outperforming existing models of similar size, released under the Mistral Research License.

If you are interested in using Ministral-3B or Ministral-8B commercially, both of which outperform Mistral-7B, reach out to us.

For more details about les Ministraux, please refer to our release blog post.

### Ministral 8B Key features

- Released under the Mistral Research License; reach out to us for a commercial license
- Trained with a 128k context window with interleaved sliding-window attention
- Trained on a large proportion of multilingual and code data
- Supports function calling
- Vocabulary size of 131k, using the V3-Tekken tokenizer (see the quick check after this list)
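
As a quick way to verify the tokenizer details above, here is a minimal sketch using `transformers`. It assumes you have accepted the Mistral Research License on Hugging Face, since the upstream repository is gated.

```python
from transformers import AutoTokenizer

# Upstream repository; access requires accepting the Mistral Research License.
tok = AutoTokenizer.from_pretrained("mistralai/Ministral-8B-Instruct-2410")

# The V3-Tekken tokenizer has a vocabulary on the order of 131k entries.
print(tok.vocab_size)
```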

### Basic Instruct Template (V3-Tekken)

```
<s>[INST]user message[/INST]assistant response</s>[INST]new user message[/INST]
```
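
For illustration, a minimal sketch of assembling this template by hand with plain Python string formatting. The helper name is illustrative, not part of any official API; in practice, a tokenizer's built-in chat template handles this automatically.

```python
def build_v3_tekken_prompt(turns, pending_user_message):
    """Assemble a prompt following the V3-Tekken instruct template.

    `turns` is a list of completed (user_message, assistant_response) pairs;
    `pending_user_message` is the new message awaiting a response.
    """
    prompt = "<s>"
    for user, assistant in turns:
        prompt += f"[INST]{user}[/INST]{assistant}</s>"
    prompt += f"[INST]{pending_user_message}[/INST]"
    return prompt

print(build_v3_tekken_prompt([("Hello!", "Hi! How can I help?")], "Tell me a joke."))
# <s>[INST]Hello![/INST]Hi! How can I help?</s>[INST]Tell me a joke.[/INST]
```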

## GGUF details

- Model size: 8.02B params
- Architecture: llama
- Available quantizations: 1-bit, 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit
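
As an example of loading one of these files locally, here is a sketch using `llama-cpp-python`. The filename is hypothetical; substitute whichever quantization you downloaded.

```python
from llama_cpp import Llama

# Hypothetical local filename; pick any of the quantizations listed above.
llm = Llama(model_path="Ministral-8B-Instruct-2410-Q4_K_M.gguf", n_ctx=8192)

# create_chat_completion applies the chat template stored in the GGUF metadata,
# so the V3-Tekken formatting does not need to be written by hand.
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize what les Ministraux are."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```

Any llama.cpp-based client should work the same way, since all the files share the `llama` architecture noted above.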
