
Update README.md

#9 opened by reach-vb (HF staff)
Files changed (1)
  1. README.md +1 -0
README.md CHANGED
@@ -7,6 +7,7 @@ language:
 - es
 - en
 inference: false
+library_name: mlx
 ---
 # Model Card for Mixtral-8x7B
 The Mixtral-8x7B Large Language Model (LLM) is a pretrained generative Sparse Mixture of Experts. The Mixtral-8x7B outperforms Llama 2 70B on most benchmarks we tested.