|
---
license: apache-2.0
language:
- fr
- it
- de
- es
- en
inference: false
---
|
# Model Card for Mixtral-8x7B |
|
The Mixtral-8x7B Large Language Model (LLM) is a pretrained generative Sparse Mixture of Experts. It outperforms Llama 2 70B on most benchmarks we tested.
|
|
|
For full details of this model, please read our [release blog post](https://mistral.ai/news/mixtral-of-experts/).
|
|
|
## Instruction format |
|
|
|
The following format must be strictly respected; otherwise, the model will generate sub-optimal outputs.
|
|
|
The template used to build a prompt for the Instruct model is defined as follows: |
|
```
<s> [INST] Instruction [/INST] Model answer</s> [INST] Follow-up instruction [/INST]
```
|
Note that `<s>` and `</s>` are special tokens for beginning of string (BOS) and end of string (EOS), while `[INST]` and `[/INST]` are regular strings.
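As an illustration only, here is a minimal sketch of how such a prompt string could be assembled from a list of completed turns plus a new instruction. The `build_prompt` helper and the example messages are hypothetical, not part of the reference implementation, and in practice the BOS/EOS markers should be added as special token IDs by the tokenizer (as the pseudo-code below shows) rather than encoded as literal text.

```python
# Minimal sketch (hypothetical helper): assemble the Instruct prompt string.
# In practice, <s> and </s> should be inserted as special token IDs by the
# tokenizer, not tokenized as plain text.
def build_prompt(turns, new_instruction):
    """turns: list of (instruction, model_answer) pairs already completed."""
    prompt = "<s>"
    for instruction, answer in turns:
        prompt += f" [INST] {instruction} [/INST] {answer}</s>"
    prompt += f" [INST] {new_instruction} [/INST]"
    return prompt

print(build_prompt(
    [("Name the largest planet.", "Jupiter is the largest planet.")],
    "And the smallest?",
))
# <s> [INST] Name the largest planet. [/INST] Jupiter is the largest planet.</s> [INST] And the smallest? [/INST]
```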
|
|
|
For reference, here is the pseudo-code used to tokenize instructions during fine-tuning:
|
```python
def tokenize(text):
    return tok.encode(text, add_special_tokens=False)

[BOS_ID] +
tokenize("[INST]") + tokenize(USER_MESSAGE_1) + tokenize("[/INST]") +
tokenize(BOT_MESSAGE_1) + [EOS_ID] +
…
tokenize("[INST]") + tokenize(USER_MESSAGE_N) + tokenize("[/INST]") +
tokenize(BOT_MESSAGE_N) + [EOS_ID]
```
|
|
|
In the pseudo-code above, note that the `tokenize` method should not add a BOS or EOS token automatically, but should add a prefix space. |
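A runnable version of the pseudo-code above might look like the sketch below. It assumes the Hugging Face `transformers` library and uses the `mistralai/Mixtral-8x7B-Instruct-v0.1` checkpoint purely for illustration; the conversation is a made-up example, and the underlying SentencePiece tokenizer is expected to supply the prefix space mentioned above.

```python
from transformers import AutoTokenizer

# Illustrative checkpoint choice; any tokenizer following this format would do.
tok = AutoTokenizer.from_pretrained("mistralai/Mixtral-8x7B-Instruct-v0.1")

def tokenize(text):
    # Encode without letting the tokenizer add BOS/EOS on its own.
    return tok.encode(text, add_special_tokens=False)

# Hypothetical conversation, for illustration only.
conversation = [
    ("Name the largest planet.", "Jupiter is the largest planet."),
    ("And the smallest?", "Mercury is the smallest planet."),
]

ids = [tok.bos_token_id]
for user_message, bot_message in conversation:
    ids += tokenize("[INST]") + tokenize(user_message) + tokenize("[/INST]")
    ids += tokenize(bot_message) + [tok.eos_token_id]

print(tok.decode(ids))
```

The resulting `ids` list follows the layout shown in the pseudo-code: a single BOS token, then alternating instruction and answer spans, with each answer terminated by EOS.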
|
|
|
## Run the model |