mistralai/Mixtral-8x7B-v0.1
Text Generation · Transformers · Safetensors · 5 languages · mixtral · Mixture of Experts · text-generation-inference · Inference Endpoints · License: apache-2.0
Community (64)
No multi GPU inference support? · 8 replies · #4 opened 12 months ago by dataautogpt3 (see the multi-GPU sketch after this list)
Is it possible to get GPTQ quants in 4 bpw? · 1 reply · #2 opened 12 months ago by MrHillsss (see the quantization sketch after this list)
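
Neither thread is resolved in this listing, but the first question (#4) concerns running Mixtral-8x7B-v0.1 across several GPUs. Below is a minimal sketch of how this is commonly done with the Transformers library, assuming `transformers` and `accelerate` are installed and the combined GPU memory can hold the fp16 weights; the prompt and generation settings are placeholders, not taken from the discussion.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mixtral-8x7B-v0.1"

tokenizer = AutoTokenizer.from_pretrained(model_id)

# device_map="auto" lets Accelerate shard the layers across all visible GPUs,
# so the roughly 90 GB of fp16 weights do not have to fit on a single card.
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",
)

inputs = tokenizer("Mixture-of-experts models route tokens by", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```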
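
The second question (#2) asks about GPTQ quants at roughly 4 bits per weight. A hedged sketch of producing such a quant with the Transformers GPTQ integration is below, assuming `optimum` and `auto-gptq` are installed and the backend supports Mixtral's mixture-of-experts layers; the calibration dataset ("c4"), group size, and output directory are illustrative choices, not values from the discussion.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, GPTQConfig

model_id = "mistralai/Mixtral-8x7B-v0.1"

tokenizer = AutoTokenizer.from_pretrained(model_id)

# bits=4 targets the ~4 bits-per-weight asked about; "c4" is a stock calibration set.
quant_config = GPTQConfig(bits=4, group_size=128, dataset="c4", tokenizer=tokenizer)

# Quantization runs while the model loads and needs enough GPU memory for calibration.
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=quant_config,
    device_map="auto",
)

model.save_pretrained("Mixtral-8x7B-v0.1-GPTQ-4bit")
tokenizer.save_pretrained("Mixtral-8x7B-v0.1-GPTQ-4bit")
```

In practice, many users download a pre-quantized community checkpoint instead of running calibration on a model this large themselves.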