MLX · Safetensors · mixtral
reach-vb (HF staff) committed
Commit 41ec237
1 parent: dbb50e6

Update README.md

Files changed (1):
  1. README.md +0 -3
README.md CHANGED
@@ -13,9 +13,6 @@ The Mixtral-8x7B Large Language Model (LLM) is a pretrained generative Sparse Mi
 
  For full details of this model please read our [release blog post](https://mistral.ai/news/mixtral-of-experts/).
 
- ## Warning
- This repo contains weights that are compatible with [vLLM](https://github.com/vllm-project/vllm) serving of the model as well as Hugging Face [transformers](https://github.com/huggingface/transformers) library. It is based on the original Mixtral [torrent release](magnet:?xt=urn:btih:5546272da9065eddeb6fcd7ffddeef5b75be79a7&dn=mixtral-8x7b-32kseqlen&tr=udp%3A%2F%http://2Fopentracker.i2p.rocks%3A6969%2Fannounce&tr=http%3A%2F%http://2Ftracker.openbittorrent.com%3A80%2Fannounce), but the file format and parameter names are different. Please note that model cannot (yet) be instantiated with HF.
-
  ## Instruction format
 
  This format must be strictly respected, otherwise the model will generate sub-optimal outputs.
 
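For context, the warning removed by this commit described the weights as compatible with the Hugging Face transformers library (and with vLLM serving). Below is a minimal, hedged sketch of loading such a Mixtral checkpoint through transformers; the repo id is an assumption and should be replaced with this repository's actual id, and a transformers release with Mixtral support is assumed.

```python
# Hedged sketch: loading a Mixtral checkpoint with Hugging Face transformers.
# The repo id below is an assumption; substitute the id of this repository.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mixtral-8x7B-v0.1"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",   # requires `accelerate`; spreads layers across available devices
    torch_dtype="auto",  # load weights in the checkpoint's native dtype
)

prompt = "Mixtral is a sparse mixture-of-experts model that"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```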