---
language:
- fr
- it
- de
- es
- en
license: apache-2.0
library_name: mlx
tags:
- moe
inference: false
---
# Model Card for Mixtral-8x7B
The Mixtral-8x7B Large Language Model (LLM) is a pretrained generative Sparse Mixture of Experts. It outperforms Llama 2 70B on most benchmarks we tested.

For full details of this model please read our [release blog post](https://mistral.ai/news/mixtral-of-experts/).
## Instruction format
The template used to build a prompt for the Instruct model is defined as follows. This format must be strictly respected; otherwise the model will generate sub-optimal outputs.
```
<s> [INST] Instruction [/INST] Model answer</s> [INST] Follow-up instruction [/INST]
```
Note that `<s>` and `</s>` are special tokens for beginning of string (BOS) and end of string (EOS), while `[INST]` and `[/INST]` are regular strings.
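
To make the template concrete, here is a minimal Python sketch that assembles a multi-turn prompt in the format above. The `build_prompt` helper is purely illustrative and not part of this repository or of mlx.

```python
# Illustrative helper: assemble a multi-turn prompt following the template above.
# <s> and </s> are written out literally here for clarity; if your tokenizer
# adds the BOS token automatically, do not duplicate it in the text.
def build_prompt(turns):
    """turns: list of (instruction, answer) pairs; use None for the pending answer."""
    prompt = "<s>"
    for instruction, answer in turns:
        prompt += f" [INST] {instruction} [/INST]"
        if answer is not None:
            prompt += f" {answer}</s>"
    return prompt

print(build_prompt([
    ("What is your favourite condiment?",
     "Well, I'm quite partial to a good squeeze of fresh lemon juice."),
    ("Do you have mayonnaise recipes?", None),
]))
# <s> [INST] What is your favourite condiment? [/INST] Well, I'm quite partial
# to a good squeeze of fresh lemon juice.</s> [INST] Do you have mayonnaise recipes? [/INST]
```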
## Run the model
```bash
# Install mlx and huggingface_hub (which provides huggingface-cli), then clone mlx-examples
pip install mlx
pip install huggingface_hub hf_transfer
git clone https://github.com/ml-explore/mlx-examples.git

# Download the converted weights
export HF_HUB_ENABLE_HF_TRANSFER=1
huggingface-cli download --local-dir Mixtral-8x7B-Instruct-v0.1 mlx-community/Mixtral-8x7B-Instruct-v0.1

# Run the example script
python mlx-examples/mixtral/mixtral.py --model_path Mixtral-8x7B-Instruct-v0.1
```
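
If you prefer to drive everything from Python, the sketch below downloads the weights with `huggingface_hub.snapshot_download` and then invokes the example script with an instruct-formatted prompt. The `--prompt` flag is an assumption about the script's CLI, not something stated above; check `python mlx-examples/mixtral/mixtral.py --help` for the exact argument names in your checkout.

```python
# Sketch: download the converted weights and run the mlx example from Python.
# Assumption: mixtral.py accepts a --prompt flag; verify with --help first.
import subprocess
from huggingface_hub import snapshot_download

# Same download as the huggingface-cli command above.
model_dir = snapshot_download(
    repo_id="mlx-community/Mixtral-8x7B-Instruct-v0.1",
    local_dir="Mixtral-8x7B-Instruct-v0.1",
)

# Instruct-format prompt; the script's tokenizer typically adds the BOS token,
# so <s> is omitted here.
prompt = "[INST] Write a short note on sparse mixture-of-experts models. [/INST]"

subprocess.run(
    [
        "python", "mlx-examples/mixtral/mixtral.py",
        "--model_path", model_dir,
        "--prompt", prompt,  # assumed flag (see note above)
    ],
    check=True,
)
```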