Hugging Face
wolfram/miquliz-120b-v2.0-3.0bpw-h6-exl2
Tags: Text Generation · Transformers · Safetensors · 5 languages · llama · mergekit · Merge · conversational · text-generation-inference · Inference Endpoints
arxiv: 2203.05482
Community (2)
Very interesting that miqu still handles 16k context even with only the first and last layers
15 · #2 opened 9 months ago by akoyaki
VRAM requirements
9 · #1 opened 10 months ago by sophosympatheia